Dynamic Control of Explore/Exploit Trade-Off In Bayesian Optimization
Bayesian optimization offers the possibility of optimizing black-box
functions not accessible through traditional techniques. The success of
Bayesian optimization methods such as Expected Improvement (EI) is
significantly affected by the degree of trade-off between exploration and
exploitation. Too much exploration can lead to inefficient optimization
protocols, whilst too much exploitation leaves the protocol open to strong
initial biases, and a high chance of getting stuck in a local minimum.
Typically, a constant margin is used to control this trade-off, which results
in yet another hyper-parameter to be optimized. We propose contextual
improvement as a simple yet effective heuristic to counter this, yielding a
one-shot optimization strategy. Our proposed heuristic can be swiftly
calculated and improves both the speed and robustness of discovery of optimal
solutions. We demonstrate its effectiveness on both synthetic and real-world
problems and explore the unaccounted-for uncertainty in the pre-determination
of search hyperparameters controlling the explore-exploit trade-off.
Comment: Accepted for publication in the proceedings of the 2018 Computing
Conference.
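The constant margin mentioned in this abstract enters standard Expected Improvement as an offset on the incumbent best value. A minimal sketch of that baseline (function and parameter names are illustrative, not the paper's contextual-improvement heuristic):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best, xi=0.01):
    """Expected Improvement for minimization, with a constant
    exploration margin xi -- the extra hyper-parameter the
    abstract proposes to eliminate."""
    sigma = np.maximum(sigma, 1e-12)   # guard against zero variance
    imp = best - mu - xi               # improvement over incumbent
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)
```

Larger xi discounts the posterior mean and pushes the search toward uncertain regions; xi = 0 recovers plain EI.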
The filtering equations revisited
The problem of nonlinear filtering has engendered a surprising number of
mathematical techniques for its treatment. A notable example is the
change-of-probability-measure method originally introduced by Kallianpur and
Striebel to derive the filtering equations and the Bayes-like formula that
bears their names. More recent work, however, has generally preferred other
methods. In this paper, we reconsider the change-of-measure approach to the
derivation of the filtering equations and show that many of the technical
conditions present in previous work can be relaxed. The filtering equations are
established for general Markov signal processes that can be described by a
martingale-problem formulation. Two specific applications are treated.
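For the standard observation model (assumed here for illustration) $dY_t = h(X_t)\,dt + dW_t$, the Kallianpur–Striebel formula referred to above expresses the filter $\pi_t$ as a ratio of conditional expectations under a reference measure $\tilde{\mathbb{P}}$:

```latex
\pi_t(\varphi)
  = \frac{\tilde{\mathbb{E}}\left[\varphi(X_t)\,\Lambda_t \mid \mathcal{Y}_t\right]}
         {\tilde{\mathbb{E}}\left[\Lambda_t \mid \mathcal{Y}_t\right]},
\qquad
\Lambda_t
  = \exp\!\left(\int_0^t h(X_s)\,dY_s
                - \tfrac{1}{2}\int_0^t h(X_s)^2\,ds\right),
```

where $\mathcal{Y}_t$ is the observation filtration and $\Lambda_t$ is the likelihood (change-of-measure) process.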
Links between traumatic brain injury and ballistic pressure waves originating in the thoracic cavity and extremities
Identifying patients at risk of traumatic brain injury (TBI) is important
because research suggests prophylactic treatments to reduce risk of long-term
sequelae. Blast pressure waves can cause TBI without penetrating wounds or
blunt force trauma. Similarly, bullet impacts distant from the brain can
produce pressure waves sufficient to cause mild to moderate TBI. The fluid
percussion model of TBI shows that pressure impulses of 15-30 psi cause mild to
moderate TBI in laboratory animals. In pigs and dogs, bullet impacts to the
thigh produce pressure waves in the brain of 18-45 psi and measurable injury to
neurons and neuroglia. Analyses of research in goats and epidemiological data
from shooting events involving humans show high correlations (r > 0.9) between
rapid incapacitation and pressure wave magnitude in the thoracic cavity. A case
study has documented epilepsy resulting from a pressure wave without the bullet
directly hitting the brain. Taken together, these results support the
hypothesis that bullet impacts distant from the brain produce pressure waves
that travel to the brain and can retain sufficient magnitude to induce brain
injury. The link to long-term sequelae could be investigated via
epidemiological studies of patients who were gunshot in the chest to determine
whether they experience elevated rates of epilepsy and other neurological
sequelae.
Bayesian optimization for materials design
We introduce Bayesian optimization, a technique developed for optimizing
time-consuming engineering simulations and for fitting machine learning models
on large datasets. Bayesian optimization guides the choice of experiments
during materials design and discovery to find good material designs in as few
experiments as possible. We focus on the case when materials designs are
parameterized by a low-dimensional vector. Bayesian optimization is built on a
statistical technique called Gaussian process regression, which allows
predicting the performance of a new design based on previously tested designs.
After providing a detailed introduction to Gaussian process regression, we
introduce two Bayesian optimization methods: expected improvement, for design
problems with noise-free evaluations; and the knowledge-gradient method, which
generalizes expected improvement and may be used in design problems with noisy
evaluations. Both methods are derived using a value-of-information analysis,
and enjoy one-step Bayes-optimality.
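The Gaussian process regression prediction described above can be sketched directly from the standard posterior equations; the RBF kernel and noise level below are illustrative assumptions, not the paper's specific choices:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """Squared-exponential kernel between row-vector inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xstar, kernel, noise=1e-10):
    """Posterior mean and pointwise variance at test points Xstar,
    given previously tested designs (X, y)."""
    K = kernel(X, X) + noise * np.eye(len(X))
    Ks = kernel(X, Xstar)
    Kss = kernel(Xstar, Xstar)
    L = np.linalg.cholesky(K)                       # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha                             # predictive mean
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v                             # predictive covariance
    return mean, np.diag(cov)
```

At previously tested designs the posterior mean reproduces the observed values and the variance collapses toward the noise level, which is what lets an acquisition function such as expected improvement rank untested designs.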
On Bayesian Search for the Feasible Space Under Computationally Expensive Constraints
We are often interested in identifying the feasible subset of a decision space under multiple constraints to permit effective design exploration. If determining feasibility required computationally expensive simulations, the cost of exploration would be prohibitive. Bayesian search is data-efficient for such problems: starting from a small dataset, the central concept is to use Bayesian models of constraints with an acquisition function to locate promising solutions that may improve predictions of feasibility when the dataset is augmented. At the end of this sequential active learning approach with a limited number of expensive evaluations, the models can accurately predict the feasibility of any solution, obviating the need for full simulations. In this paper, we propose a novel acquisition function that combines the probability that a solution lies at the boundary between feasible and infeasible spaces (representing exploitation) and the entropy in predictions (representing exploration). Experiments confirmed the efficacy of the proposed function.
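One way the combination of boundary proximity and predictive entropy could be realized is sketched below; both terms and the trade-off weight are illustrative assumptions rather than the paper's exact definitions:

```python
import numpy as np

def feasibility_acquisition(p, w=0.5):
    """Score candidate solutions from a model's predicted probability
    of feasibility p. The boundary term (exploitation) peaks where the
    model thinks p = 0.5, i.e. at the feasible/infeasible boundary;
    the entropy term (exploration) rewards uncertain predictions.
    w is an assumed trade-off weight."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    boundary = 1.0 - 2.0 * np.abs(p - 0.5)                 # in [0, 1]
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return w * boundary + (1 - w) * entropy / np.log(2)    # entropy scaled to [0, 1]
```

Candidates maximizing this score would be evaluated with the expensive simulation and appended to the dataset, after which the constraint models are refit.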
Quadratic optimal functional quantization of stochastic processes and numerical applications
In this paper, we present an overview of the recent developments of
functional quantization of stochastic processes, with an emphasis on the
quadratic case. Functional quantization is a way to approximate a process,
viewed as a Hilbert-valued random variable, using a nearest neighbour
projection on a finite codebook. A special emphasis is made on the
computational aspects and the numerical applications, in particular the pricing
of some path-dependent European options.
Comment: 41 pages.
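The nearest-neighbour projection on a finite codebook described above can be sketched for a time-discretized path, approximating the L^2([0,T]) norm by a Riemann sum (the discretization and function names are illustrative):

```python
import numpy as np

def nearest_codeword(path, codebook, dt):
    """Project a discretized path (length-n array) onto a finite
    codebook (m x n array of quantizer paths) by nearest neighbour
    in the Riemann-sum approximation of the L^2 norm.
    Returns the codeword index and the quantization error."""
    d2 = ((codebook - path[None, :]) ** 2).sum(axis=1) * dt
    i = int(np.argmin(d2))
    return i, float(np.sqrt(d2[i]))
```

In quadratic functional quantization the codebook itself is optimized so that the expected squared error of this projection is minimal for the given process; here only the projection step is shown.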
Deletion of the GABAA α2-subunit does not alter self-administration of cocaine or reinstatement of cocaine seeking
Rationale
GABAA receptors containing α2-subunits are highly represented in brain areas that are involved in motivation and reward, and have been associated with addiction to several drugs, including cocaine. We have shown previously that a deletion of the α2-subunit results in an absence of sensitisation to cocaine.
Objective
We investigated the reinforcing properties of cocaine in GABAA α2-subunit knockout (KO) mice using an intravenous self-administration procedure.
Methods
α2-subunit wildtype (WT), heterozygous (HT) and KO mice were trained to lever press for a 30 % condensed milk solution. After implantation with a jugular catheter, mice were trained to lever press for cocaine (0.5 mg/kg/infusion) during ten daily sessions. Responding was extinguished and the mice tested for cue- and cocaine-primed reinstatement. Separate groups of mice were trained to respond for decreasing doses of cocaine (0.25, 0.125, 0.06 and 0.03 mg/kg).
Results
No differences were found in acquisition of lever pressing for milk. All genotypes acquired self-administration of cocaine and did not differ in rates of self-administration, dose dependency or reinstatement. However, whilst WT and HT mice showed a dose-dependent increase in lever pressing during the cue presentation, KO mice did not.
Conclusions
Despite a reported absence of sensitisation, motivation to obtain cocaine remains unchanged in KO and HT mice. Reinstatement of cocaine seeking by cocaine and cocaine-paired cues is also unaffected. We postulate that whilst not directly involved in reward perception, the α2-subunit may be involved in modulating the “energising” aspect of cocaine’s effects on reward-seeking.
Surface electrons at plasma walls
In this chapter we introduce a microscopic modelling of the surplus electrons
on the plasma wall which complements the classical description of the plasma
sheath. First we introduce a model for the electron surface layer to study the
quasistationary electron distribution and the potential at an unbiased plasma
wall. Then we calculate sticking coefficients and desorption times for electron
trapping in the image states. Finally we study how surplus electrons affect
light scattering and how charge signatures offer the possibility of a novel
charge measurement for dust grains.Comment: To appear in Complex Plasmas: Scientific Challenges and Technological
Opportunities, Editors: M. Bonitz, K. Becker, J. Lopez and H. Thomse